Results 1 - 11 of 11
1.
Nat Commun ; 15(1): 2681, 2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38538600

ABSTRACT

Ovarian cancer, a heterogeneous group of diseases with diverse presentations, has the highest mortality among gynecological malignancies. Accurate and early diagnosis of ovarian cancer is therefore of great significance. Here, we present OvcaFinder, an interpretable model that combines deep learning (DL) predictions based on ultrasound images, radiologists' Ovarian-Adnexal Reporting and Data System scores, and routine clinical variables. OvcaFinder outperforms both the clinical model and the DL model, with areas under the curve (AUCs) of 0.978 and 0.947 in the internal and external test datasets, respectively. OvcaFinder assistance improved radiologists' AUCs and inter-reader agreement: the average AUC increased from 0.927 to 0.977 and from 0.904 to 0.941, and the false-positive rate decreased by 13.4% and 8.3% in the internal and external test datasets, respectively. These results highlight the potential of OvcaFinder to improve radiologists' diagnostic accuracy and consistency in identifying ovarian cancer.
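
For illustration, a minimal sketch of how a DL image probability, an O-RADS score, and clinical variables might be fused with logistic regression. The abstract does not specify OvcaFinder's fusion method; the feature set, synthetic data, and model choice below are assumptions, not the paper's implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 500
dl_prob = rng.uniform(0, 1, n)        # DL model's malignancy probability
orads = rng.integers(2, 6, n)         # O-RADS category (2 to 5)
age = rng.integers(20, 80, n)         # example clinical variable
ca125 = rng.lognormal(3.0, 1.0, n)    # example clinical variable (U/mL)
# Synthetic ground truth loosely driven by the features above.
y = (0.6 * dl_prob + 0.1 * (orads - 2) / 3
     + rng.normal(0, 0.2, n) > 0.5).astype(int)

X = np.column_stack([dl_prob, orads, age, np.log(ca125)])
fusion = LogisticRegression(max_iter=1000).fit(X, y)
print("fused AUC  :", roc_auc_score(y, fusion.predict_proba(X)[:, 1]))
print("DL-only AUC:", roc_auc_score(y, dl_prob))
```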


Subjects
Ovarian Neoplasms, Female, Humans, Ovarian Neoplasms/diagnostic imaging, Area Under Curve, Extremities, Radiologists, Retrospective Studies
2.
IEEE Trans Med Imaging ; PP, 2024 Jan 12.
Article in English | MEDLINE | ID: mdl-38215335

ABSTRACT

Deep learning (DL)-based rib fracture detection shows promise for preventing mortality and improving patient outcomes. Developing DL-based object detection models normally requires a large amount of bounding-box annotation. However, annotating medical data is time-consuming and expertise-demanding, making it infeasible to obtain fine-grained annotations at scale. This poses a pressing need for label-efficient detection models that alleviate radiologists' labeling burden. To tackle this challenge, the object detection literature has seen an increase in weakly supervised and semi-supervised approaches, yet it still lacks a unified framework that leverages all forms of fully labeled, weakly labeled, and unlabeled data. In this paper, we present a novel omni-supervised object detection network, ORF-Netv2, to leverage as much available supervision as possible. Specifically, a multi-branch omni-supervised detection head is introduced, with each branch trained on a specific type of supervision. A co-training-based dynamic label assignment strategy is then proposed to enable flexible and robust learning from the weakly labeled and unlabeled data. The framework was extensively evaluated on three rib fracture datasets covering both chest CT and X-ray. By leveraging all forms of supervision, ORF-Netv2 achieves mAPs of 34.7, 44.7, and 19.4 on the three datasets, respectively, surpassing the baseline detector, which uses only box annotations, by mAP gains of 3.8, 4.8, and 5.0. Furthermore, ORF-Netv2 consistently outperforms other competitive label-efficient methods across various scenarios, making it a promising framework for label-efficient fracture detection. The code is available at: https://github.com/zhizhongchai/ORF-Net/tree/main.
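
The abstract describes a multi-branch head in which each branch is trained with one type of supervision. The following is a simplified, hypothetical sketch of such supervision-dependent loss routing; it is not the released ORF-Net code (see the repository above), and the confidence threshold and the weak-label loss are assumptions.

```python
import torch
import torch.nn.functional as F

def omni_loss(logits, target, supervision):
    """logits: (N, C) scores for N candidate regions of one image;
    target: labels (box-level for 'full', a single image label for 'weak');
    supervision: 'full' | 'weak' | 'none'."""
    if supervision == "full":
        # Fully labeled: standard cross-entropy against box-level labels.
        return F.cross_entropy(logits, target)
    if supervision == "weak":
        # Weakly labeled: multiple-instance-style loss that holds only the
        # most confident region responsible for the image-level label.
        image_logit = logits.max(dim=0).values.unsqueeze(0)
        return F.cross_entropy(image_logit, target.view(1))
    # Unlabeled: confidence-thresholded pseudo-labels from the model itself.
    probs = logits.softmax(dim=1).detach()
    conf, pseudo = probs.max(dim=1)
    mask = conf > 0.9
    if not mask.any():
        return logits.new_zeros(())
    return F.cross_entropy(logits[mask], pseudo[mask])

# Example: 8 candidate regions, 3 classes, one image-level label.
logits = torch.randn(8, 3, requires_grad=True)
print(omni_loss(logits, torch.tensor(1), "weak"))
```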

3.
Radiol Artif Intell ; 4(5): e210299, 2022 Sep.
Article in English | MEDLINE | ID: mdl-36204545

ABSTRACT

Purpose: To evaluate the ability of fine-grained annotations to overcome shortcut learning in deep learning (DL)-based diagnosis using chest radiographs. Materials and Methods: Two DL models were developed using radiograph-level annotations (disease present: yes or no) and fine-grained lesion-level annotations (lesion bounding boxes), respectively named CheXNet and CheXDet. A total of 34,501 chest radiographs obtained from January 2005 to September 2019 were retrospectively collected and annotated for cardiomegaly, pleural effusion, mass, nodule, pneumonia, pneumothorax, tuberculosis, fracture, and aortic calcification. The internal classification performance and lesion localization performance of the models were compared on a testing set (n = 2922); external classification performance was compared on the National Institutes of Health (NIH) Google (n = 4376) and PadChest (n = 24,536) datasets; and external lesion localization performance was compared on the NIH ChestX-ray14 dataset (n = 880). The models were also compared with radiologist performance on a subset of the internal testing set (n = 496). Performance was evaluated using receiver operating characteristic (ROC) curve analysis. Results: Given sufficient training data, both models performed similarly to radiologists. CheXDet achieved significant improvement in external classification, such as classifying fracture on NIH Google (CheXDet area under the ROC curve [AUC], 0.67; CheXNet AUC, 0.51; P < .001) and PadChest (CheXDet AUC, 0.78; CheXNet AUC, 0.55; P < .001). CheXDet achieved higher lesion detection performance than CheXNet for most abnormalities on all datasets, such as detecting pneumothorax on the internal set (CheXDet jackknife alternative free-response ROC [JAFROC] figure of merit [FOM], 0.87; CheXNet JAFROC FOM, 0.13; P < .001) and NIH ChestX-ray14 (CheXDet JAFROC FOM, 0.55; CheXNet JAFROC FOM, 0.04; P < .001). Conclusion: Fine-grained annotations overcame shortcut learning and enabled DL models to identify correct lesion patterns, improving the generalizability of the models. Keywords: Computer-aided Diagnosis, Conventional Radiography, Convolutional Neural Network (CNN), Deep Learning Algorithms, Machine Learning Algorithms, Localization. Supplemental material is available for this article. © RSNA, 2022.
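
For readers reproducing this kind of external AUC comparison, a sketch of a paired-bootstrap confidence interval for the AUC difference between two models. The predictions are synthetic stand-ins, not CheXNet/CheXDet outputs, and the paper's own significance testing may differ.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 1000)                                  # ground truth
pred_a = np.clip(0.3 * y + rng.uniform(0, 0.7, 1000), 0, 1)   # stronger model
pred_b = rng.uniform(0, 1, 1000)                              # near-chance model

deltas = []
for _ in range(2000):
    idx = rng.integers(0, len(y), len(y))   # resample cases with replacement
    if len(np.unique(y[idx])) < 2:
        continue                            # AUC undefined for a single class
    deltas.append(roc_auc_score(y[idx], pred_a[idx]) -
                  roc_auc_score(y[idx], pred_b[idx]))
lo, hi = np.percentile(deltas, [2.5, 97.5])
print(f"AUC difference 95% CI: [{lo:.3f}, {hi:.3f}]")
```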

4.
Radiol Artif Intell ; 3(5): e200248, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34617026

ABSTRACT

PURPOSE: To evaluate the performance of a deep learning-based algorithm for automatic detection and labeling of rib fractures on multicenter chest CT images. MATERIALS AND METHODS: This retrospective study included 10,943 patients (mean age, 55 years; 6418 men) from six hospitals (January 1, 2017 to December 30, 2019), consisting of patients with and without rib fractures who underwent CT. The patients were separated into one training set (n = 2425), two lesion-level test sets (n = 362 and 105), and one examination-level test set (n = 8051). The free-response receiver operating characteristic (FROC) score (mean sensitivity at seven different false-positive rates), precision, sensitivity, and F1 score were used to assess rib fracture detection performance. Area under the receiver operating characteristic curve (AUC), sensitivity, and specificity were used to evaluate classification accuracy. The mean Dice coefficient and accuracy were used to assess rib labeling performance. RESULTS: In the detection of rib fractures, the model achieved an FROC score of 84.3% on test set 1. On test set 2, the algorithm achieved detection performance (precision, 82.2%; sensitivity, 84.9%; F1 score, 83.3%) comparable to that of three radiologists (precision, 81.7%, 98.0%, 92.0%; sensitivity, 91.2%, 78.6%, 69.2%; F1 score, 86.1%, 87.2%, 78.9%). When the radiologists used the algorithm, their mean sensitivity improved (from 79.7% to 89.2%) while precision remained similar (from 90.6% to 88.4%). Furthermore, the model achieved an AUC of 0.93 (95% CI: 0.91, 0.94), sensitivity of 87.9% (95% CI: 83.7%, 91.4%), and specificity of 85.3% (95% CI: 74.6%, 89.8%) on test set 3. On a subset of test set 1, the model achieved a Dice score of 0.827 with an accuracy of 96.0% for rib segmentation. CONCLUSION: The developed deep learning algorithm was capable of detecting rib fractures and labeling their corresponding anatomic locations on CT images. Keywords: CT, Ribs. © RSNA, 2021.
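
The FROC score above is the mean sensitivity across several false-positive-per-scan operating points. A sketch of that computation follows, using synthetic detections; the paper's lesion-matching rules and its exact seven false-positive rates are assumptions here.

```python
import numpy as np

def froc_score(conf, is_tp, n_lesions, n_scans,
               fp_rates=(0.5, 1, 2, 4, 8, 16, 32)):
    """Mean sensitivity over fixed false-positive-per-scan operating points.

    conf: confidence of each detection; is_tp: whether each detection matched
    a ground-truth lesion; n_lesions: total ground-truth lesions; n_scans:
    number of scans in the test set.
    """
    order = np.argsort(-np.asarray(conf))
    is_tp = np.asarray(is_tp, dtype=bool)[order]
    tps = np.cumsum(is_tp)            # true positives as the threshold lowers
    fps = np.cumsum(~is_tp)           # false positives as the threshold lowers
    sens = []
    for rate in fp_rates:
        within = fps <= rate * n_scans
        sens.append(tps[within][-1] / n_lesions if within.any() else 0.0)
    return float(np.mean(sens))

# Synthetic example: 200 detections over 50 scans with 150 true lesions.
rng = np.random.default_rng(0)
print("FROC:", froc_score(rng.uniform(0, 1, 200),
                          rng.uniform(0, 1, 200) < 0.6,
                          n_lesions=150, n_scans=50))
```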

5.
Med Image Anal ; 70: 102010, 2021 May.
Article in English | MEDLINE | ID: mdl-33677262

ABSTRACT

Convolutional neural networks have achieved prominent success on a variety of medical imaging tasks when a large amount of labeled training data is available. However, acquiring expert annotations for medical data is usually expensive and time-consuming, which poses a great challenge for supervised learning approaches. In this work, we propose a novel semi-supervised deep learning method, deep virtual adversarial self-training with consistency regularization, for large-scale medical image classification. To effectively exploit useful information from unlabeled data, we leverage self-training and consistency regularization to harness the underlying knowledge, which helps improve the discrimination capability of the trained models. More concretely, the model first generates a pseudo-label from its prediction on a weakly augmented input image. The pseudo-label is kept only if the corresponding class probability is highly confident. The model's prediction on a strongly augmented version of the same image is then encouraged to be consistent with this pseudo-label. To improve the robustness of the network against virtual adversarial perturbations of the input, we incorporate virtual adversarial training (VAT) on both labeled and unlabeled data into the training process. Hence, the network is trained by minimizing a combination of three losses: a standard supervised loss on labeled data, a consistency regularization loss on unlabeled data, and a VAT loss on both labeled and unlabeled data. We extensively evaluate the proposed method on two challenging medical image classification tasks: breast cancer screening from ultrasound images and multi-class ophthalmic disease classification from optical coherence tomography B-scan images. Experimental results demonstrate that the proposed method outperforms both the supervised baseline and other state-of-the-art methods by a large margin on both tasks.
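
A minimal sketch of the three-part objective described above (supervised cross-entropy, confidence-thresholded consistency between weakly and strongly augmented views, and a VAT term), assuming illustrative hyperparameters for the threshold, perturbation size, and loss weights; it is not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def consistency_loss(model, x_weak, x_strong, threshold=0.95):
    # Pseudo-label from the weakly augmented view; keep only confident labels.
    with torch.no_grad():
        probs = model(x_weak).softmax(dim=1)
        conf, pseudo = probs.max(dim=1)
        mask = (conf >= threshold).float()
    ce = F.cross_entropy(model(x_strong), pseudo, reduction="none")
    return (ce * mask).mean()

def vat_loss(model, x, eps=1.0, xi=1e-6):
    # Find the small perturbation that most changes the prediction, then
    # penalize the prediction shift under that perturbation.
    with torch.no_grad():
        p = model(x).softmax(dim=1)
    shape = (-1,) + (1,) * (x.dim() - 1)
    d = torch.randn_like(x)
    d = xi * d / d.flatten(1).norm(dim=1).view(shape).clamp_min(1e-12)
    d = d.detach().requires_grad_(True)
    adv_dist = F.kl_div(F.log_softmax(model(x + d), dim=1), p,
                        reduction="batchmean")
    grad = torch.autograd.grad(adv_dist, d)[0]
    r_adv = eps * grad / grad.flatten(1).norm(dim=1).view(shape).clamp_min(1e-12)
    return F.kl_div(F.log_softmax(model(x + r_adv.detach()), dim=1), p,
                    reduction="batchmean")

def total_loss(model, x_l, y_l, x_u_weak, x_u_strong, lam_cons=1.0, lam_vat=0.3):
    sup = F.cross_entropy(model(x_l), y_l)                  # supervised loss
    cons = consistency_loss(model, x_u_weak, x_u_strong)    # unlabeled consistency
    vat = vat_loss(model, x_l) + vat_loss(model, x_u_weak)  # labeled + unlabeled VAT
    return sup + lam_cons * cons + lam_vat * vat
```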


Subjects
Breast Neoplasms, Supervised Machine Learning, Female, Humans, Neural Networks (Computer), Optical Coherence Tomography, Ultrasonography
6.
Med Image Anal ; 69: 101955, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33588122

ABSTRACT

Cervical cancer is one of the most lethal cancers threatening women's health. Nevertheless, its incidence can be effectively minimized with preventive clinical management strategies, including vaccination and regular screening examinations. Screening cervical smears under the microscope is a widely used routine examination, but it consumes a large amount of cytologists' time and labor. Computerized cytology analysis caters to this imperative need, alleviating cytologists' workload and reducing the potential misdiagnosis rate. However, automatic analysis of cervical smears via digitized whole slide images (WSIs) remains a challenging problem, owing to the extremely high image resolution, the presence of tiny lesions, noisy datasets, and intricate clinical class definitions with fuzzy boundaries. In this paper, we design an efficient deep convolutional neural network (CNN) with a dual-path (DP) encoder for lesion retrieval, which ensures inference efficiency and sensitivity to both tiny and large lesions. Incorporating a synergistic grouping loss (SGL), the network can be effectively trained on noisy datasets with fuzzy inter-class boundaries. Inspired by cytologists' clinical diagnostic criteria, a novel smear-level classifier, rule-based risk stratification (RRS), is proposed for accurate smear-level classification and risk stratification, aligning with the intricate cytological definitions of the classes. Extensive experiments on the largest dataset of its kind, comprising 19,303 WSIs from multiple medical centers, validate the robustness of our method. Achieving a sensitivity of 0.907 and a specificity of 0.80, our method demonstrates the potential to reduce cytologists' workload in routine practice.
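
The abstract does not spell out the RRS rules, so the following is a purely hypothetical sketch of rule-based smear-level stratification from retrieved lesions; the grades, thresholds, and risk classes are invented for illustration only.

```python
def stratify_smear(lesions):
    """lesions: (grade, confidence) pairs retrieved from one WSI, where the
    grades follow cytology terminology such as 'HSIL' > 'LSIL' > 'ASC-US'.
    All thresholds below are hypothetical."""
    confident = [(g, c) for g, c in lesions if c >= 0.5]
    if any(g == "HSIL" for g, _ in confident):
        return "high risk"
    if sum(g == "LSIL" for g, _ in confident) >= 3:
        return "moderate risk"
    if confident:
        return "low risk"
    return "negative"

print(stratify_smear([("LSIL", 0.7), ("ASC-US", 0.6), ("LSIL", 0.4)]))  # low risk
```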


Subjects
Uterine Cervical Neoplasms, Vaginal Smears, Female, Humans, Computer-Assisted Image Processing, Neural Networks (Computer), Risk Assessment, Uterine Cervical Neoplasms/diagnostic imaging
7.
IEEE Trans Cybern ; 50(9): 3950-3962, 2020 Sep.
Article in English | MEDLINE | ID: mdl-31484154

ABSTRACT

Histopathology image analysis serves as the gold standard for cancer diagnosis. Efficient and precise diagnosis is critical for patients' subsequent therapeutic treatment. So far, computer-aided diagnosis has not been widely applied in pathology, as the currently well-addressed tasks are only the tip of the iceberg. Whole slide image (WSI) classification is a challenging problem. First, the scarcity of annotations heavily impedes the development of effective approaches: pixelwise delineated annotations on WSIs are time-consuming and tedious to produce, which makes building a large-scale training dataset difficult. In addition, the variety of heterogeneous tumor patterns present at high magnification is a major obstacle. Furthermore, a gigapixel-scale WSI cannot be analyzed directly owing to the prohibitive computational cost. Designing weakly supervised learning methods that maximize the use of WSI-level labels, which can be readily obtained in clinical practice, is therefore appealing. To overcome these challenges, we present a weakly supervised approach for fast and effective classification of whole-slide lung cancer images. Our method first uses a patch-based fully convolutional network (FCN) to retrieve discriminative blocks and efficiently provide representative deep features. Then, different context-aware block selection and feature aggregation strategies are explored to generate a globally holistic WSI descriptor, which is ultimately fed into a random forest (RF) classifier for the image-level prediction. To the best of our knowledge, this is the first study to exploit the potential of image-level labels along with some coarse annotations for weakly supervised learning. A large-scale lung cancer WSI dataset is constructed for evaluation, validating the effectiveness and feasibility of the proposed method. Extensive experiments demonstrate the superior performance of our method, which surpasses state-of-the-art approaches by a significant margin with an accuracy of 97.3%. Our method also achieves the best performance on the public lung cancer WSI dataset from The Cancer Genome Atlas (TCGA). We highlight that a small number of coarse annotations can further improve accuracy. We believe that weakly supervised learning methods have great potential to assist pathologists in histology image diagnosis in the near future.
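
A sketch of the aggregation stage described above: pooling patch-level deep features into a holistic WSI descriptor and classifying it with a random forest. The features are random stand-ins for FCN outputs, and the mean/max pooling choice is just one example of an aggregation strategy.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def wsi_descriptor(patch_feats):
    # Example aggregation: concatenate mean- and max-pooled patch features.
    return np.concatenate([patch_feats.mean(axis=0), patch_feats.max(axis=0)])

# 100 slides, each with a variable number of 64-dim patch features.
X = np.stack([wsi_descriptor(rng.normal(size=(rng.integers(20, 60), 64)))
              for _ in range(100)])
y = rng.integers(0, 2, 100)  # synthetic slide-level labels

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```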


Subjects
Deep Learning, Computer-Assisted Image Interpretation/methods, Lung Neoplasms, Supervised Machine Learning, Histocytochemistry, Humans, Lung Neoplasms/chemistry, Lung Neoplasms/diagnostic imaging, Lung Neoplasms/pathology
8.
Med Image Anal ; 58: 101549, 2019 12.
Article in English | MEDLINE | ID: mdl-31499320

ABSTRACT

Whole slide histopathology images (WSIs) play a critical role in gastric cancer diagnosis. However, owing to the large scale of WSIs and the varying sizes of abnormal areas, selecting informative regions and analyzing them are challenging steps in the automatic diagnosis process. Multi-instance learning based on the most discriminative instances can greatly benefit whole-slide gastric image diagnosis. In this paper, we design a recalibrated multi-instance deep learning method (RMDL) to address this challenging problem. We first select the discriminative instances and then use them to diagnose diseases with the proposed RMDL approach. The RMDL network is capable of capturing instance-wise dependencies and recalibrating instance features according to importance coefficients learned from the fused features. Furthermore, we build a large whole-slide gastric histopathology image dataset with detailed pixel-level annotations. Experimental results on the constructed gastric dataset demonstrate a significant accuracy improvement of our proposed framework over other state-of-the-art multi-instance learning methods. Moreover, our method is general and can be extended to WSI-based diagnosis tasks for other cancer types.
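
A minimal sketch in the spirit of RMDL's recalibration step: importance coefficients computed from the fused instance features re-weight each instance before aggregation. The layer sizes, gating network, and final pooling are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class RecalibratedMIL(nn.Module):
    """Instance features are re-weighted by importance coefficients that a
    small gating network computes from the fused (mean) representation."""
    def __init__(self, dim=128, n_classes=2):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(dim, dim // 4), nn.ReLU(),
                                  nn.Linear(dim // 4, dim), nn.Sigmoid())
        self.classifier = nn.Linear(dim, n_classes)

    def forward(self, instances):            # instances: (n_instances, dim)
        fused = instances.mean(dim=0)        # fuse instances into a global view
        coeff = self.gate(fused)             # learned importance coefficients
        recalibrated = instances * coeff     # recalibrate every instance
        return self.classifier(recalibrated.mean(dim=0))

logits = RecalibratedMIL()(torch.randn(32, 128))
print(logits.shape)  # torch.Size([2])
```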


Subjects
Deep Learning, Stomach Neoplasms/pathology, Calibration, Datasets as Topic, Differential Diagnosis, Humans, Staining and Labeling, Stomach Neoplasms/diagnostic imaging
9.
IEEE Trans Med Imaging ; 38(2): 550-560, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30716025

ABSTRACT

Automated detection of cancer metastases in lymph nodes has the potential to improve the assessment of prognosis for patients. To enable fair comparison between the algorithms for this purpose, we set up the CAMELYON17 challenge in conjunction with the IEEE International Symposium on Biomedical Imaging 2017 Conference in Melbourne. Over 300 participants registered on the challenge website, and 23 teams submitted a total of 37 algorithms before the initial deadline. Participants were provided with 899 whole-slide images (WSIs) for developing their algorithms. The developed algorithms were evaluated on a test set encompassing 100 patients and 500 WSIs. The evaluation metric used was a quadratic weighted Cohen's kappa. We discuss the algorithmic details of the 10 best pre-conference and two post-conference submissions. All these participants used convolutional neural networks in combination with pre- and postprocessing steps. Algorithms differed mostly in neural network architecture, training strategy, and pre- and postprocessing methodology. Overall, the kappa metric ranged from 0.89 to -0.13 across all submissions. The best results were obtained with pre-trained architectures such as ResNet. Confusion matrix analysis revealed that all participants struggled with reliably identifying isolated tumor cells, the smallest type of metastasis, with detection rates below 40%. Qualitative inspection of the results of the top participants showed categories of false positives, such as nerves or contamination, which could be targeted for further optimization. Lastly, we show that simple combinations of the top algorithms result in higher kappa metric values than any algorithm individually, with 0.93 for the best combination.
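
The challenge metric, quadratic weighted Cohen's kappa, can be computed directly with scikit-learn; the pN-stage labels below are synthetic examples, not challenge data.

```python
from sklearn.metrics import cohen_kappa_score

# Synthetic pN-stage labels for five artificial patients.
truth = ["pN0", "pN1", "pN2", "pN1", "pN0"]
preds = ["pN0", "pN2", "pN2", "pN1", "pN1"]
print(cohen_kappa_score(truth, preds, weights="quadratic"))
```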


Subjects
Computer-Assisted Image Interpretation/methods, Lymphatic Metastasis/diagnostic imaging, Sentinel Lymph Node/diagnostic imaging, Algorithms, Breast Neoplasms/pathology, Female, Histological Techniques, Humans, Lymphatic Metastasis/pathology, Sentinel Lymph Node/pathology
10.
IEEE Trans Med Imaging ; 38(8): 1948-1958, 2019 Aug.
Article in English | MEDLINE | ID: mdl-30624213

ABSTRACT

Lymph node metastasis is one of the most important indicators in breast cancer diagnosis and is traditionally observed under the microscope by pathologists. In recent years, with the dramatic advance of high-throughput scanning and deep learning technology, automatic analysis of histology from whole-slide images has received a wealth of interest in the field of medical image computing, aiming to alleviate pathologists' workload and simultaneously reduce the misdiagnosis rate. However, automatic detection of lymph node metastases from whole-slide images remains a key challenge because such images are typically very large, often multiple gigabytes in size. Also, the presence of hard mimics may result in a large number of false positives. In this paper, we propose a novel method with anchor layers for model conversion, which not only leverages the efficiency of fully convolutional architectures to meet the speed requirements of clinical practice but also densely scans the whole-slide image to achieve accurate predictions on both micro- and macro-metastases. Incorporating the strategies of asynchronous sample prefetching and hard negative mining, the network can be trained effectively. The efficacy of our method is corroborated on the benchmark dataset of the 2016 Camelyon Grand Challenge. Our method achieved significant improvements over state-of-the-art methods in tumor localization accuracy at a much faster speed, and even surpassed human performance on both challenge tasks.
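
The method relies on converting a patch classifier into a fully convolutional network so a whole slide can be scanned densely. A generic sketch of that kind of conversion (the paper's specific anchor layers are not reproduced here): a final linear layer is re-expressed as an equivalent 1x1 convolution, so arbitrary input sizes yield a probability map instead of a single score.

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)   # shared toy backbone
fc = nn.Linear(16, 2)                               # patch classifier head

# Patch model: global pooling plus a linear head on a fixed-size patch.
patch_model = nn.Sequential(conv, nn.ReLU(), nn.AdaptiveAvgPool2d(1),
                            nn.Flatten(), fc)

# Dense model: same backbone weights, head re-expressed as a 1x1 convolution.
head = nn.Conv2d(16, 2, kernel_size=1)
with torch.no_grad():
    head.weight.copy_(fc.weight.view(2, 16, 1, 1))
    head.bias.copy_(fc.bias)
dense_model = nn.Sequential(conv, nn.ReLU(), head)

probs = dense_model(torch.randn(1, 3, 512, 512)).softmax(dim=1)
print(probs.shape)  # torch.Size([1, 2, 512, 512]): a dense probability map
```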


Subjects
Deep Learning, Computer-Assisted Image Interpretation/methods, Lymphatic Metastasis/diagnostic imaging, Algorithms, Histocytochemistry, Humans, Lymph Nodes/diagnostic imaging, Lymph Nodes/pathology, Lymphatic Metastasis/pathology
11.
JAMA ; 318(22): 2199-2210, 2017 Dec 12.
Article in English | MEDLINE | ID: mdl-29234806

ABSTRACT

Importance: Application of deep learning algorithms to whole-slide pathology images can potentially improve diagnostic accuracy and efficiency. Objective: To assess the performance of automated deep learning algorithms at detecting metastases in hematoxylin and eosin-stained tissue sections of lymph nodes of women with breast cancer and to compare it with pathologists' diagnoses in a diagnostic setting. Design, Setting, and Participants: Researcher challenge competition (CAMELYON16) to develop automated solutions for detecting lymph node metastases (November 2015-November 2016). A training data set of whole-slide images from 2 centers in the Netherlands with (n = 110) and without (n = 160) nodal metastases verified by immunohistochemical staining was provided to challenge participants to build algorithms. Algorithm performance was evaluated in an independent test set of 129 whole-slide images (49 with and 80 without metastases). The same test set of corresponding glass slides was also evaluated by a panel of 11 pathologists with time constraint (WTC) from the Netherlands, who ascertained the likelihood of nodal metastases for each slide in a flexible 2-hour session simulating routine pathology workflow, and by 1 pathologist without time constraint (WOTC). Exposures: Deep learning algorithms submitted as part of a challenge competition or pathologist interpretation. Main Outcomes and Measures: The presence of specific metastatic foci and the absence vs presence of lymph node metastasis in a slide or image, using receiver operating characteristic curve analysis. The 11 pathologists participating in the simulation exercise rated their diagnostic confidence as definitely normal, probably normal, equivocal, probably tumor, or definitely tumor. Results: The area under the receiver operating characteristic curve (AUC) for the algorithms ranged from 0.556 to 0.994. The top-performing algorithm achieved a lesion-level, true-positive fraction comparable with that of the pathologist WOTC (72.4% [95% CI, 64.3%-80.4%]) at a mean of 0.0125 false positives per normal whole-slide image. For the whole-slide image classification task, the best algorithm (AUC, 0.994 [95% CI, 0.983-0.999]) performed significantly better than the pathologists WTC in a diagnostic simulation (mean AUC, 0.810 [range, 0.738-0.884]; P < .001). The top 5 algorithms had a mean AUC comparable with that of the pathologist interpreting the slides in the absence of time constraints (mean AUC, 0.960 [range, 0.923-0.994] for the top 5 algorithms vs 0.966 [95% CI, 0.927-0.998] for the pathologist WOTC). Conclusions and Relevance: In the setting of a challenge competition, some deep learning algorithms achieved better diagnostic performance than a panel of 11 pathologists participating in a simulation exercise designed to mimic routine pathology workflow; algorithm performance was comparable with that of an expert pathologist interpreting whole-slide images without time constraints. Whether this approach has clinical utility will require evaluation in a clinical setting.
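
In reader studies like this one, ROC curves for pathologists are built from ordinal confidence ratings. A sketch with synthetic ratings mapped from the five-level scale described above; the numbers are illustrative, not study data.

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

rng = np.random.default_rng(1)
y = rng.integers(0, 2, 129)  # 1 = metastasis present (synthetic)
# Map "definitely normal" .. "definitely tumor" to scores 1..5, with positive
# slides tending toward higher ratings.
rating = np.clip(3 + 1.2 * y + rng.normal(0, 1, 129), 1, 5).round()

fpr, tpr, _ = roc_curve(y, rating)
print("reader AUC:", auc(fpr, tpr))
```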


Subjects
Breast Neoplasms/pathology, Lymphatic Metastasis/diagnosis, Machine Learning, Pathologists, Algorithms, Female, Humans, Lymphatic Metastasis/pathology, Clinical Pathology, ROC Curve